
    Cardiovascular Risk Stratification in Diabetic Retinopathy via Atherosclerotic Pathway in COVID-19/non-COVID-19 Frameworks using Artificial Intelligence Paradigm: A Narrative Review

    Diabetes is one of the main causes of the rising cases of blindness in adults. This microvascular complication of diabetes is termed diabetic retinopathy (DR) and is associated with an increasing risk of cardiovascular events in diabetes patients. DR, in its various forms, is a powerful indicator of atherosclerosis. Further, the macrovascular complications of diabetes lead to coronary artery disease (CAD). Thus, the timely identification of cardiovascular disease (CVD) complications in DR patients is of utmost importance. Since CAD risk assessment is expensive for low-income countries, it is important to look for surrogate biomarkers for risk stratification of CVD in DR patients. Due to the common genetic makeup of the coronary and carotid arteries, low-cost, high-resolution imaging such as carotid B-mode ultrasound (US) can be used for arterial tissue characterization and risk stratification in DR patients. The advent of artificial intelligence (AI) techniques has facilitated the handling of large cohorts in a big data framework to identify atherosclerotic plaque features in arterial ultrasound. This enables timely CVD risk assessment and risk stratification of patients with DR. Thus, this review focuses on understanding the pathophysiology of DR, retinal and CAD imaging, the role of surrogate markers for CVD, and finally, the CVD risk stratification of DR patients. The review shows a step-by-step cyclic activity of how diabetes and atherosclerotic disease cause DR, leading to the worsening of CVD. We propose a solution for how AI can help in the identification of CVD risk. Lastly, we analyze the role of DR/CVD in the COVID-19 framework.

    Multiclass magnetic resonance imaging brain tumor classification using artificial intelligence paradigm

    Motivation: Brain or central nervous system cancer is the tenth leading cause of death in men and women. Even though brain tumours are not considered the primary cause of mortality worldwide, 40% of other types of cancer (such as lung or breast cancers) transform into brain tumours due to metastasis. Although biopsy is considered the gold standard for cancer diagnosis, it poses several challenges, such as low sensitivity/specificity, risk during the biopsy procedure, and relatively long waiting times for the biopsy results. Due to an increase in the sheer volume of patients with brain tumours, there is a need for a non-invasive, automatic computer-aided diagnosis tool that can diagnose and estimate the grade of a tumour accurately within a few seconds. Method: Five clinically relevant multiclass datasets (two-, three-, four-, five-, and six-class) were designed. A transfer-learning-based artificial intelligence paradigm using a Convolutional Neural Network (CNN) was proposed and led to higher performance in brain tumour grading/classification using magnetic resonance imaging (MRI) data. We benchmarked the transfer-learning-based CNN model against six different machine learning (ML) classification methods, namely Decision Tree, Linear Discrimination, Naive Bayes, Support Vector Machine, K-nearest neighbour, and Ensemble. Results: The CNN-based deep learning (DL) model outperforms the six types of ML models on all five multiclass tumour datasets (two-, three-, four-, five-, and six-class). The CNN-based AlexNet transfer learning system yielded mean accuracies, derived from three kinds of cross-validation protocols (K2, K5, and K10), of 100, 95.97, 96.65, 87.14, and 93.74%, respectively. The mean areas under the curve of DL and ML were found to be 0.99 and 0.87, respectively, for p < 0.0001, and DL showed a 12.12% improvement over ML.
Multiclass datasets were benchmarked against the TT protocol (where training and testing samples are the same). The optimal model was statistically validated using a tumour separation index and further verified on synthetic data consisting of eight classes. Conclusion: The transfer-learning-based AI system is useful for multiclass brain tumour grading and shows better performance than ML systems.
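The grading results above rest on k-fold cross-validation (the K2, K5, and K10 protocols). A minimal, self-contained sketch of that protocol, substituting a toy nearest-centroid classifier on synthetic two-class data for the AlexNet transfer-learning model (all data and names here are illustrative, not from the paper):

```python
import random

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k roughly equal, shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nearest_centroid_accuracy(X, y, k):
    """Mean accuracy of a nearest-centroid classifier under k-fold CV."""
    folds = kfold_indices(len(X), k)
    accs = []
    for fold in folds:
        test = set(fold)
        train = [i for i in range(len(X)) if i not in test]
        # per-class centroids computed only from the training folds
        groups = {}
        for i in train:
            groups.setdefault(y[i], []).append(X[i])
        cents = {c: [sum(d) / len(d) for d in zip(*pts)]
                 for c, pts in groups.items()}
        correct = sum(
            min(cents, key=lambda c: sum((a - b) ** 2
                for a, b in zip(X[i], cents[c]))) == y[i]
            for i in fold)
        accs.append(correct / len(fold))
    return sum(accs) / len(accs)

# two well-separated synthetic classes, 40 samples each
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(40)] + \
    [[rng.gauss(6, 1), rng.gauss(6, 1)] for _ in range(40)]
y = [0] * 40 + [1] * 40

for k in (2, 5, 10):  # the K2, K5, and K10 protocols
    print(f"K{k}: {nearest_centroid_accuracy(X, y, k):.2%}")
```

The point of reporting K2 alongside K10 is that smaller training folds stress the model harder; a system whose accuracy holds up across all three protocols is less likely to be overfitting.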

    Multicenter Study on COVID-19 Lung Computed Tomography Segmentation with varying Glass Ground Opacities using Unseen Deep Learning Artificial Intelligence Paradigms: COVLIAS 1.0 Validation

    Variations in COVID-19 lesions such as ground glass opacities (GGO), consolidations, and crazy paving can compromise the ability of solo-deep learning (SDL) or hybrid-deep learning (HDL) artificial intelligence (AI) models to predict automated COVID-19 lung segmentation in Computed Tomography (CT) from unseen data, leading to poor clinical manifestations. As the first study of its kind, “COVLIAS 1.0-Unseen” proves two hypotheses: (i) contrast adjustment is vital for AI, and (ii) HDL is superior to SDL. In a multicenter study, 10,000 CT slices were collected from 72 Italian (ITA) patients with low-GGO and 80 Croatian (CRO) patients with high-GGO. Hounsfield Units (HU) were automatically adjusted to train the AI models and predict from test data, leading to four combinations: two Unseen sets, (i) train-CRO:test-ITA and (ii) train-ITA:test-CRO, and two Seen sets, (iii) train-CRO:test-CRO and (iv) train-ITA:test-ITA. COVLIAS used three SDL models (PSPNet, SegNet, UNet) and six HDL models (VGG-PSPNet, VGG-SegNet, VGG-UNet, ResNet-PSPNet, ResNet-SegNet, and ResNet-UNet). Two trained, blinded senior radiologists conducted ground truth annotations. Five types of performance metrics were used to validate COVLIAS 1.0-Unseen, which was further benchmarked against MedSeg, an open-source web-based system. After HU adjustment, for DS and JI, HDL (Unseen AI) > SDL (Unseen AI) by 4% and 5%, respectively. For CC, HDL (Unseen AI) > SDL (Unseen AI) by 6%. The COVLIAS-MedSeg difference was < 5%, meeting regulatory guidelines. Unseen AI was successfully demonstrated using automated HU adjustment. HDL was found to be superior to SDL.
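Two of the building blocks above, HU adjustment and the DS/JI overlap metrics, can be sketched in a few lines. The lung-window bounds (-1000 to 400 HU) are a common CT convention assumed here for illustration; the abstract does not state which window the study used:

```python
def adjust_hu(slice_hu, lo=-1000.0, hi=400.0):
    """Clip a CT slice to a Hounsfield Unit window and rescale to [0, 1].

    lo/hi are an assumed lung window, not values from the study.
    """
    span = hi - lo
    return [[(min(max(v, lo), hi) - lo) / span for v in row]
            for row in slice_hu]

def dice_jaccard(pred, gt):
    """Dice similarity (DS) and Jaccard index (JI) for flat binary masks."""
    inter = sum(p & g for p, g in zip(pred, gt))
    ps, gs = sum(pred), sum(gt)
    union = ps + gs - inter
    ds = 2 * inter / (ps + gs) if ps + gs else 1.0
    ji = inter / union if union else 1.0
    return ds, ji

# windowing: air (-2000) and bone (1000) saturate, lung tissue spreads out
print(adjust_hu([[-2000.0, -700.0, 400.0, 1000.0]]))

# overlap of a predicted mask against ground truth
print(dice_jaccard([1, 1, 0, 0], [1, 0, 1, 0]))  # DS = 0.5, JI = 1/3
```

Note the general relation JI = DS / (2 - DS): Dice always reads at least as high as Jaccard, which is why the study reports both.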

    Eight pruning deep learning models for low storage and high-speed COVID-19 computed tomography lung segmentation and heatmap-based lesion localization: A multicenter study using COVLIAS 2.0

    Background: COVLIAS 1.0, an automated lung segmentation system, was designed for COVID-19 diagnosis but has issues related to storage space and speed. This study shows that COVLIAS 2.0 uses pruned AI (PAI) networks to improve both storage and speed while maintaining high performance on lung segmentation and lesion localization. Methodology: The proposed study uses approximately 9,000 multicenter CT slices from two different nations, namely CroMed from Croatia (80 patients, experimental data) and NovMed from Italy (72 patients, validation data). We hypothesize that by using pruning and evolutionary optimization algorithms, the size of the AI models can be reduced significantly while ensuring optimal performance. Eight pruned models were designed by combining four optimization algorithms, (i) differential evolution (DE), (ii) genetic algorithm (GA), (iii) particle swarm optimization (PSO), and (iv) whale optimization (WO), with two deep learning frameworks, (i) fully connected network (FCN) and (ii) SegNet. COVLIAS 2.0 was validated using “Unseen NovMed” and benchmarked against MedSeg. Statistical tests for stability and reliability were also conducted. Results: The pruned models (i) FCN-DE, (ii) FCN-GA, (iii) FCN-PSO, and (iv) FCN-WO showed improvements in storage of 92.4%, 95.3%, 98.7%, and 99.8%, respectively, when compared against solo FCN, and (v) SegNet-DE, (vi) SegNet-GA, (vii) SegNet-PSO, and (viii) SegNet-WO showed improvements of 97.1%, 97.9%, 98.8%, and 99.2%, respectively, when compared against solo SegNet. AUC was > 0.94 (p < 0.0001) on CroMed and > 0.86 (p < 0.0001) on the NovMed data set for all eight EA models. PAI inference took < 0.25 s per image. DenseNet-121-based Grad-CAM heatmaps showed validation on ground glass opacity lesions. Conclusions: The eight PAI networks that were successfully validated are five times faster and storage efficient, and could be used in clinical settings.
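The storage savings reported above come from zeroing out network weights so that sparse formats need not store them. A toy sketch of pruning and the resulting storage saving, using a simple magnitude criterion as a stand-in for the evolutionary search (DE/GA/PSO/WO) that the study actually employs; the weight vector and keep ratio below are illustrative:

```python
import random

def prune_by_magnitude(weights, keep_ratio):
    """Keep only the largest-|w| fraction of weights; zero the rest.

    A magnitude criterion standing in for the study's evolutionary search.
    """
    k = max(1, int(len(weights) * keep_ratio))
    thresh = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= thresh else 0.0 for w in weights]

def storage_saving_pct(weights):
    """Percent of weights that are zero; sparse storage can skip them."""
    return 100.0 * sum(1 for w in weights if w == 0.0) / len(weights)

# a synthetic dense layer of 1,000 weights
rng = random.Random(0)
dense = [rng.uniform(-1, 1) for _ in range(1000)]

pruned = prune_by_magnitude(dense, keep_ratio=0.05)  # keep the top 5%
print(f"storage saving: {storage_saving_pct(pruned):.1f}%")  # ~95%
```

The 92-99% savings in the abstract correspond to keep ratios of roughly 1-8%; the evolutionary algorithms' job is to choose which weights to drop so that segmentation quality (AUC) survives such aggressive sparsity.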